
    Knowledge Extraction using Capsule Deep Learning Approaches

    Limited training data, high dimensionality, the complexity of images (generated from spatiotemporal signals in BCI), and similarity between classes are the main challenges confronting deep learning (DL) methods and can result in suboptimal classification performance. Most DL methods employ Convolutional Neural Networks (CNNs), which include pooling layers in their architecture. Pooling discards valuable information and the exact spatial correlations between different entity parts. More importantly, pooling cannot preserve a new viewpoint of an object in the image. The Capsule Neural Network (CapsNet) was introduced to address these shortcomings by preserving the hierarchy between different entity parts in an image, even when using limited training samples [1], [2]. The potential of CapsNet methods has been demonstrated across multiple disciplines, including hyperspectral imaging, image classification, segmentation, video detection, and human movement recognition. Motivated by CapsNet, we recently developed an end-to-end DL architecture, the Hybrid Capsule Network (HCapsNet), which achieves state-of-the-art results for hyperspectral image classification while using far fewer training samples [3]. In another study, conducted in collaboration with the Human Brain and Behavior Lab at Florida Atlantic University (Prof. Kelso and colleagues), the proposed CapsNet architecture yielded encouraging results for investigating infant intrinsic movement at various phases of a new experiment [4]. The results demonstrated the performance of 2D CapsNets in assessing the spatial relationships between different body parts using 2D histogram features. Decoding the non-invasive electroencephalography (EEG) provided by wearable neurotechnology is a major challenge for AI. My research has also focused on developing novel AI algorithms to address the issues involved in decoding EEG signals into control signals for neurotechnology based on brain-computer interfaces (BCIs).
These methods are expected to be well suited to BCI applications, particularly when learning diverse EEG properties from limited training data (typically the case for BCIs). Building on recent work on decoding imagined speech using CNNs, we also applied CapsNet to direct speech BCIs [5]. In this research, the CapsNet architecture is modified using multi-level feature maps and multiple capsule layers. In addition, the new Tier 2 Northern Ireland High-Performance Computing facility enabled us to train deep models with enormous processing power; massively parallel computing with the Asynchronous Successive Halving Algorithm (ASHA) is therefore used for hyperparameter optimisation. Since CapsNet is still in its early stages of development and has demonstrated promising results on several challenging datasets, this method has the potential to foster relationships with colleagues in other disciplines, which could lead to new research applications.

References

[1] S. Sabour, N. Frosst, and G. E. Hinton, "Dynamic routing between capsules," in Advances in Neural Information Processing Systems, 2017, pp. 3857–3867. Accessed: Apr. 09, 2022. [Online]. Available: https://proceedings.neurips.cc/paper/2017/hash/2cad8fa47bbef282badbb8de5374b894-Abstract.html
[2] G. E. Hinton, S. Sabour, and N. Frosst, "Matrix capsules with EM routing," International Conference on Learning Representations (ICLR), pp. 1–15, 2018. [Online]. Available: https://openreview.net/pdf?id=HJWLfGWRb
[3] M. Khodadadzadeh, X. Ding, P. Chaurasia, and D. Coyle, "A Hybrid Capsule Network for Hyperspectral Image Classification," IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., vol. 14, pp. 11824–11839, 2021, doi: 10.1109/JSTARS.2021.3126427.
[4] M. Khodadadzadeh, A. Sloan, S. Kelso, and D. Coyle, "2D Capsule Networks Detect Perceived Changes in Infant~Environment Relationship Reflected in 3D Infant Movement Dynamics," manuscript submitted for publication in Scientific Reports, Nature, 2023.
[5] M. Khodadadzadeh and D. Coyle, "Imagined Speech Classification from Electroencephalography with a Features-Guided Capsule Neural Network," Dec. 18, 2022. Accessed: Mar. 03, 2023. [Online]. Available: https://pure.ulster.ac.uk/en/publications/imagined-speech-classification-from-electroencephalography-with-a
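The routing-by-agreement mechanism that distinguishes CapsNet from pooling-based CNNs [1] can be sketched in a few lines. This is a minimal numpy illustration of the published algorithm from Sabour et al., not the HCapsNet or speech-BCI architectures described above; the shapes and random inputs are assumptions for demonstration only.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Capsule non-linearity: short vectors shrink toward zero, long
    vectors saturate just below unit length (Sabour et al., 2017)."""
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def dynamic_routing(u_hat, n_iters=3):
    """Routing-by-agreement between two capsule layers.

    u_hat : (n_in, n_out, d_out) prediction vectors u_hat_{j|i}.
    Returns the (n_out, d_out) output capsule vectors."""
    n_in, n_out, _ = u_hat.shape
    b = np.zeros((n_in, n_out))                        # routing logits
    for _ in range(n_iters):
        e = np.exp(b - b.max(axis=1, keepdims=True))
        c = e / e.sum(axis=1, keepdims=True)           # coupling coefficients
        s = (c[..., None] * u_hat).sum(axis=0)         # weighted sum per output
        v = squash(s)
        b = b + np.einsum('ijk,jk->ij', u_hat, v)      # agreement update
    return v

rng = np.random.default_rng(0)
v = dynamic_routing(rng.standard_normal((8, 4, 16)))   # 8 input, 4 output capsules
print(v.shape)
```

Because the squash function bounds every output capsule's length below one, that length can be read directly as the probability that the entity the capsule represents is present.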
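The hyperparameter search strategy mentioned above rests on successive halving: evaluate many configurations cheaply, keep the best fraction, and grow the budget for the survivors. The sketch below shows the synchronous core of the idea; ASHA proper is the asynchronous variant that promotes configurations as soon as enough rivals finish rather than waiting at rung boundaries. The toy objective and `lr` parameter are assumptions for illustration, not the EEG models' actual search space.

```python
import random

def successive_halving(configs, evaluate, min_budget=1, eta=3, rounds=3):
    """Synchronous successive halving: score all configs at a small budget,
    keep the top 1/eta, multiply the budget by eta, and repeat."""
    budget = min_budget
    survivors = list(configs)
    for _ in range(rounds):
        scored = [(evaluate(cfg, budget), cfg) for cfg in survivors]
        scored.sort(key=lambda t: t[0], reverse=True)   # higher score = better
        survivors = [cfg for _, cfg in scored[:max(1, len(scored) // eta)]]
        budget *= eta
    return survivors[0]

# Toy objective: a hypothetical "learning rate" whose score rewards
# closeness to 1e-2 and improves uniformly with training budget.
random.seed(0)
configs = [{"lr": 10 ** random.uniform(-4, -1)} for _ in range(9)]

def evaluate(cfg, budget):
    return -abs(cfg["lr"] - 1e-2) + 0.01 * budget

best = successive_halving(configs, evaluate)
print(best)
```

With eta = 3 and nine starting configurations, only one configuration ever receives the full training budget, which is what makes the approach attractive on a shared HPC facility.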

    A Hybrid Capsule Network for Hyperspectral Image Classification


    Competing at the Cybathlon championship for people with disabilities: Long-term motor imagery brain-computer interface training of a cybathlete who has tetraplegia

    BACKGROUND: The brain–computer interface (BCI) race at the Cybathlon championship, for people with disabilities, challenges teams (BCI researchers, developers and pilots with spinal cord injury) to control an avatar on a virtual racetrack without movement. Here we describe the training regime and results of the Ulster University BCI Team pilot who has tetraplegia and was trained to use an electroencephalography (EEG)-based BCI intermittently over 10 years, to compete in three Cybathlon events. METHODS: A multi-class, multiple binary classifier framework was used to decode three kinesthetically imagined movements (motor imagery of left arm, right arm, and feet), and relaxed state. Three game paradigms were used for training, i.e., NeuroSensi, Triad, and Cybathlon Race: BrainDriver. An evaluation of the pilot's performance is presented for two Cybathlon competition training periods—spanning 20 sessions over 5 weeks prior to the 2019 competition, and 25 sessions over 5 weeks in the run up to the 2020 competition. RESULTS: Having participated in BCI training in 2009 and competed in Cybathlon 2016, the experienced pilot achieved high two-class accuracy on all class pairs when training began in 2019 (decoding accuracy > 90%, resulting in efficient NeuroSensi and Triad game control). The BrainDriver performance (i.e., Cybathlon race completion time) improved significantly during the training period leading up to the competition day, from 274 s to 156 s (255 ± 24 s to 191 ± 14 s, mean ± std) over 17 days (10 sessions) in 2019, and from 230 s to 168 s (214 ± 14 s to 181 ± 4 s) over 18 days (13 sessions) in 2020. However, on both competition occasions, performance deteriorated significantly towards the race date. CONCLUSIONS: The training regime and framework applied were highly effective in achieving competitive race completion times.
The BCI framework did not cope with the significant deviation in EEG observed in the sessions occurring shortly before and during the race day. Changes in cognitive state as a result of stress, arousal level, and fatigue, associated with the competition challenge and performance pressure, were likely contributing factors to the non-stationary effects that resulted in the BCI and pilot achieving suboptimal performance on race day. Trial registration: not registered. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s12984-022-01073-9.
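The multi-class, multiple binary classifier framework described above reduces a four-class decoding problem (left arm, right arm, feet, relax) to pairwise decisions combined by voting. The sketch below illustrates one common way to do this (one-vs-one majority voting); the class names follow the abstract, but the margin values and the specific combination rule are illustrative assumptions, not the team's actual pipeline.

```python
# One-vs-one combination of binary classifier outputs for a single EEG trial.
CLASSES = ["left_arm", "right_arm", "feet", "relax"]

def vote(margins):
    """margins: dict mapping a class pair (a, b) to a signed margin from
    that pair's binary classifier; positive favours a, negative favours b.
    Returns the class winning the most pairwise contests."""
    votes = {c: 0 for c in CLASSES}
    for (a, b), m in margins.items():
        votes[a if m >= 0 else b] += 1
    return max(votes, key=votes.get)

# Toy margins for one trial (assumed values, not real classifier output):
margins = {
    ("left_arm", "right_arm"): +0.8,
    ("left_arm", "feet"):      +0.3,
    ("left_arm", "relax"):     +0.5,
    ("right_arm", "feet"):     -0.2,
    ("right_arm", "relax"):    +0.1,
    ("feet", "relax"):         +0.4,
}
print(vote(margins))
```

A practical advantage of the pairwise decomposition, consistent with the two-class accuracies reported above, is that each binary classifier can be assessed and retrained independently when one class pair degrades.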

    Coordination Dynamics meets Active Inference and Artificial Intelligence (CD + AI2): A multi-pronged approach to understanding the dynamics of brain and the emergence of conscious agency

    How do humans discover their ability to act on the world? By tethering a baby's foot to a mobile (Fig. 1a) and measuring the motion of both in 3D, we explore how babies begin to make sense of their coordinative relationship with the world and realize their ability to make things happen (N = 16; mean age = 100.33 days). Machine and deep learning classification architectures (e.g., CapsNet) indicate that functionally connecting infants to a mobile via a tether influences the baby's movement most where it matters, namely at the point of infant∼world connection (Table 1). Using dynamics as a guide, we have developed tools to identify the moment an infant switches from spontaneous to intentional action (Fig. 1b). Preliminary coordination dynamics analysis and active inference generative modeling indicate that moments of stillness hold important epistemic value for young infants discovering their ability to change the world around them (Fig. 1c). Finally, a model of slow~fast brain coordination dynamics based on a 3D extension of the Jirsa-Kelso Excitator successfully simulated the evolution of tethered foot activity as infants transition from spontaneous to ordered action. By tuning a small number of parameters, this model captures patterns of emergent goal-directed action (Fig. 1d). Meshing concepts, methods and tools of Active Inference, Artificial Intelligence and Coordination Dynamics at multiple levels of description, the CD + AI2 program of research aims to identify key control parameters that shift the infant system from spontaneous to intentional behavior. The potent combination of mathematical modeling and quantitative analysis along with empirical study allows us to express the emergence of agency in quantifiable, lawful terms.
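To make the slow~fast modelling idea concrete, the sketch below simulates a generic two-variable excitable system in FitzHugh–Nagumo form, in which a fast variable is driven and a slow recovery variable reins it in, producing relaxation oscillations. This is only a minimal illustration of the slow~fast separation of timescales; the parameter values are standard textbook choices and it is not the authors' 3D extension of the Jirsa-Kelso Excitator.

```python
import numpy as np

def simulate(I=0.5, tau=10.0, dt=0.01, steps=20000):
    """Forward-Euler integration of a FitzHugh-Nagumo-type slow~fast system:
    x is the fast excitable variable, y the slow recovery variable."""
    x, y = -1.0, -0.5
    xs = np.empty(steps)
    for t in range(steps):
        dx = x - x ** 3 / 3.0 - y + I       # fast cubic dynamics with drive I
        dy = (x + 0.7 - 0.8 * y) / tau      # slow recovery, timescale tau
        x, y = x + dt * dx, y + dt * dy
        xs[t] = x
    return xs

xs = simulate()
# With constant drive the trajectory settles onto a limit cycle; counting
# zero crossings of x confirms sustained oscillation rather than a fixed point.
print(int(np.sum(np.diff(np.sign(xs)) != 0)))
```

Varying the drive parameter I moves the system between quiescence and sustained oscillation, which is the kind of control-parameter-driven transition the CD + AI2 program seeks to identify in infant behavior.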